Selection of the solver based on problem kind

The finite element method can be used to solve many engineering problems. The overall scheme of the method is the same regardless of the problem considered. Here, for the sake of simplicity, we have chosen the bitmap projection problem.
The basic steps of the finite element method are listed below. The last step is to select an algorithm for solving the generated system of equations. The choice of algorithm depends on the matrix structure, which in turn depends on the engineering problem under consideration.


  1. Selecting a differential equation describing an engineering problem or a physical phenomenon we want to simulate.
  2. The simplest possible engineering problem is a projection problem, such as the bitmap projection problem described in chapter one. In this problem, the differential equation is replaced by the following: find a function \( u\in C^1(\Omega) \) that approximates the given bitmap, i.e. \( u(x,y)\approx BITMAP(x,y) \), where \( \Omega \) is the domain on which our engineering problem is defined.
  3. The next step is to transform our problem into the so-called weak form, in which our equation \( u(x,y)\approx BITMAP(x,y) \) is sampled with a number of so-called test functions \( v(x,y) \) and averaged using integrals. In our exemplary bitmap projection problem, the weak form reads: \( \int_{\Omega} u(x,y)v(x,y)\,dxdy=\int_{\Omega} BITMAP(x,y)v(x,y)\,dxdy \), where \( v(x,y) \) is a test function. This equation must be satisfied for all test functions we have chosen.
  4. The next stage is the generation of a finite element mesh covering the given domain and the selection of basis functions spanned over the finite elements. At this stage, the domain in which we solve our engineering problem is divided into finite elements. Then, over the finite elements, we span the selected basis functions, whose linear combination will approximate the desired function \( u \). In our bitmap projection example described in chapter one, the domain \( \Omega \), that is, the rectangle on which the bitmap is defined, is divided into \( N_x\times N_y \) rectangular elements. Next, we span second-order B-spline basis functions over this domain, using the knot vector [0 0 0 1 2 3 ... N_x-1 N_x N_x N_x] along the \( x \) axis and the knot vector [0 0 0 1 2 3 ... N_y-1 N_y N_y N_y] along the \( y \) axis. We thus have a basis of two-dimensional B-spline functions \( B_{i,j}(x,y)=B^x_i(x)B^y_j(y), i=1,...,N_x, j=1,...,N_y \).
  5. We now discretize the weak formulation using the basis functions spanned over the computational mesh. The unknown function \( u \) is expressed as a linear combination of basis functions, and the individual basis functions are also chosen as the test functions. In the bitmap projection example described in chapter one, the B-spline basis functions selected in the previous step are used to approximate the solution, \( u=\sum_{i=1,...,N_x;j=1,...,N_y} a_{i,j}B^x_i(x)B^y_j(y) \), and similarly we choose the B-spline functions \( B^x_k(x)B^y_l(y) \) as test functions. We thus obtain the following system of equations: \( \begin{bmatrix} \int B^x_1 B^y_1 \, B^x_1 B^y_1 \,dxdy & \int B^x_1 B^y_1 \, B^x_2 B^y_1 \,dxdy & \cdots & \int B^x_1 B^y_1 \, B^x_{N_x} B^y_{N_y} \,dxdy \\ \int B^x_2 B^y_1 \, B^x_1 B^y_1 \,dxdy & \int B^x_2 B^y_1 \, B^x_2 B^y_1 \,dxdy & \cdots & \int B^x_2 B^y_1 \, B^x_{N_x} B^y_{N_y} \,dxdy \\ \vdots & \vdots & \ddots & \vdots \\ \int B^x_{N_x} B^y_{N_y} \, B^x_1 B^y_1 \,dxdy & \int B^x_{N_x} B^y_{N_y} \, B^x_2 B^y_1 \,dxdy & \cdots & \int B^x_{N_x} B^y_{N_y} \, B^x_{N_x} B^y_{N_y} \,dxdy \end{bmatrix}\begin{bmatrix} {\color{red}a_{1,1}} \\ {\color{red}a_{2,1}} \\ \vdots \\ {\color{red}a_{N_x,N_y}} \end{bmatrix} = \begin{bmatrix} \int BITMAP(x,y) \,{\color{blue}B^x_1(x)B^y_1(y)}\, dxdy \\ \int BITMAP(x,y) \,{\color{blue}B^x_2(x)B^y_1(y)}\, dxdy \\ \vdots \\ \int BITMAP(x,y) \,{\color{blue}B^x_{N_x}(x)B^y_{N_y}(y)}\, dxdy \end{bmatrix} \)
  6. The last stage is the solution of the generated system of equations.
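The steps above can be sketched in code. The following is a minimal one-dimensional analogue of the projection problem: instead of second-order B-splines on a rectangle, it uses piecewise-linear "hat" basis functions on \( [0,1] \), which keeps the assembly short while showing the same structure — assemble the mass matrix \( M_{kl}=\int B_k B_l\,dx \) and right-hand side \( b_k=\int f B_k\,dx \) element by element with Gauss quadrature, then solve \( Ma=b \). The function name and element count are illustrative, not from the text.

```python
import numpy as np

def l2_projection_1d(f, n_elems=16):
    """L2-project f onto the piecewise-linear FEM space on [0, 1].

    Solves M a = b, where M_kl = int B_k B_l dx (mass matrix)
    and b_k = int f B_k dx, using 2-point Gauss quadrature per element."""
    nodes = np.linspace(0.0, 1.0, n_elems + 1)
    h = 1.0 / n_elems
    M = np.zeros((n_elems + 1, n_elems + 1))
    b = np.zeros(n_elems + 1)
    # 2-point Gauss rule mapped to the reference element [0, 1]
    gp = np.array([0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0)])
    gw = np.array([0.5, 0.5])
    for e in range(n_elems):
        idx = [e, e + 1]                     # the two hats supported on element e
        for q, w in zip(gp, gw):
            x = nodes[e] + q * h             # physical quadrature point
            shape = np.array([1.0 - q, q])   # values of the two local basis functions
            M[np.ix_(idx, idx)] += w * h * np.outer(shape, shape)
            b[idx] += w * h * f(x) * shape
    return nodes, np.linalg.solve(M, b)

# A linear target lies in the approximation space, so the projection
# reproduces it exactly: the coefficients equal the nodal values.
nodes, coeffs = l2_projection_1d(lambda x: x)
print(np.max(np.abs(coeffs - nodes)))
```

The two-dimensional bitmap version follows the same pattern, with tensor-product B-splines and double integrals in the assembly loop.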

We have six representative solver algorithms for solving the systems of equations generated during finite element calculations. They are the Gaussian elimination algorithm, the LU factorization algorithm, the frontal solver algorithm, the multi-frontal solver algorithm, the alternating-direction solver algorithm, and the iterative algorithm.


For problems where the matrices are symmetric and positive definite, iterative algorithms based on Krylov subspace methods are often used, for example the conjugate gradient algorithm. There are also iterative algorithms for a wider class of problems, such as the GMRES algorithm.
Iterative solver algorithms are described in great detail in the textbook by Yousef Saad [1].
The number of iterations of an iterative solver depends on the type of problem being solved and on the structure and properties of the matrix after discretization. In particular, the convergence of the conjugate gradient solver is governed by the matrix condition number, i.e., the product of the modulus of the largest eigenvalue of the matrix and the modulus of the largest eigenvalue of its inverse. If the condition number is too large, so-called preconditioners are constructed, i.e., matrices by which our system of equations is multiplied in such a way that the solution stays the same but the condition number of the matrix improves. Unfortunately, the choice of preconditioner depends on the problem being solved; there are no universal algorithms that are always effective.
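The conjugate gradient method mentioned above can be written in a few lines. The following is a textbook sketch (without preconditioning), tested here on a small symmetric positive definite matrix — the 1D Laplacian stencil, chosen purely as an illustration:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # stop when the residual is small
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x

# SPD test matrix: tridiagonal 1D Laplacian
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm, near machine precision
```

A preconditioner would enter this loop by replacing `r` with a preconditioned residual in the direction updates; the algorithm's structure is otherwise unchanged.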
Exact solvers, in turn, can solve any problem; they can factorize any matrix, but their cost is usually greater than that of iterative solvers. For uniform two-dimensional meshes the cost is of the order \( {\cal O}(N^{1.5}) \); for uniform three-dimensional meshes it is of the order \( {\cal O}(N^2) \). The exception is the alternating-direction solver algorithm, whose cost is linear, \( {\cal O}(N) \).
The alternating-direction solver algorithm can be used when the matrix has a Kronecker product structure. This is the case, for example, in the projection problem, or in non-stationary simulations using the explicit method.
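The core idea behind the linear cost can be sketched directly. If the matrix is a Kronecker product \( M_x \otimes M_y \), the system can be solved by two sweeps of small one-dimensional solves, one per direction, instead of factorizing the full matrix. The dense `np.linalg.solve` calls below stand in for the banded 1D factorizations of the actual solver; the matrices are random illustrative data:

```python
import numpy as np

def kronecker_solve(Mx, My, f):
    """Solve (Mx kron My) u = f by two directional sweeps of 1D solves.

    With u reshaped row-major into U of shape (nx, ny), the system is
    equivalent to Mx @ U @ My.T = F, so we solve in x, then in y."""
    nx, ny = Mx.shape[0], My.shape[0]
    F = f.reshape(nx, ny)
    G = np.linalg.solve(Mx, F)        # sweep 1: 1D solves along the x direction
    U = np.linalg.solve(My, G.T).T    # sweep 2: 1D solves along the y direction
    return U.ravel()

# verify against the assembled Kronecker system
rng = np.random.default_rng(0)
Mx = np.eye(4) + 0.1 * rng.random((4, 4))
My = np.eye(3) + 0.1 * rng.random((3, 3))
f = rng.random(12)
u = kronecker_solve(Mx, My, f)
print(np.linalg.norm(np.kron(Mx, My) @ u - f))  # near machine precision
```

When \( M_x \) and \( M_y \) are banded (as for B-spline mass matrices), each sweep costs \( {\cal O}(N) \), which is where the linear complexity comes from.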
The multi-frontal solver algorithm is used for problems for which the matrices are not symmetric or not positive definite. The multi-frontal solver algorithm is a special implementation of the Gaussian elimination or LU factorization algorithm.
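The relationship between Gaussian elimination and LU factorization is worth making concrete: elimination's row multipliers, stored instead of discarded, form the factor \( L \). Below is a minimal Doolittle sketch without pivoting (a multi-frontal solver organizes the same elimination over a tree of small "frontal" submatrices, which this sketch does not show):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without pivoting: A = L @ U.

    Gaussian elimination zeroes column k below the diagonal;
    storing each multiplier in L[i, k] yields the LU factors."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate entry (i, k)
    return L, U

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
L, U = lu_no_pivot(A)
print(np.linalg.norm(L @ U - A))  # near machine precision
```

Once \( A = LU \) is available, each right-hand side costs only a forward and a backward substitution, which is why factorization-based solvers are attractive when many right-hand sides share the same matrix.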
The iterative version of the alternating-direction algorithm described in this manual is used for explicit-method simulations on irregular geometries.


Last modified: Wednesday, 06 October 2021, 06:59:13 UTC. Author: Maciej Paszynski